Training Neural Networks
01. Instructor
02. Training Optimization
03. Testing
04. Overfitting and Underfitting
05. Early Stopping
06. Regularization
07. Regularization 2
08. Dropout
09. Local Minima
10. Random Restart
11. Vanishing Gradient
12. Other Activation Functions
13. Batch vs Stochastic Gradient Descent
14. Learning Rate Decay
15. Momentum
16. Error Functions Around the World
14. Learning Rate Decay
Learning Rate
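The lesson text itself is not preserved in this outline, but the idea named by the heading, learning rate decay, is to shrink the optimizer's step size as training progresses so that early steps explore quickly and later steps settle into a minimum. A minimal sketch of one common schedule (1/t decay; the function name `decayed_lr` and the parameter values are illustrative assumptions, not the lesson's exact formula):

```python
def decayed_lr(initial_lr, decay_rate, epoch):
    """Illustrative 1/t-style decay: the learning rate shrinks
    as the epoch count grows (hypothetical helper, not from the lesson)."""
    return initial_lr / (1.0 + decay_rate * epoch)

# Example: start at 0.1 with a decay factor of 0.5 per epoch.
for epoch in range(5):
    print(epoch, round(decayed_lr(0.1, 0.5, epoch), 4))
```

Other widely used schedules (exponential decay, step decay, cosine annealing) follow the same principle: large steps early, small steps late.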